Scaling Latent Reasoning via Looped Language Models

Zhu, Rui-Jie, Wang, Zixuan, Hua, Kai, Zhang, Tianyu, Li, Ziniu, Que, Haoran, Wei, Boyi, Wen, Zixin, Yin, Fan, Xing, He, Li, Lu, Shi, Jiajun, Ma, Kaijing, Li, Shanda, Kergan, Taylor, Smith, Andrew, Qu, Xingwei, Hui, Mude, Wu, Bohong, Min, Qiyang, Huang, Hongzhi, Zhou, Xun, Ye, Wei, Liu, Jiaheng, Yang, Jian, Shi, Yunfeng, Lin, Chenghua, Zhao, Enduo, Cai, Tianle, Zhang, Ge, Huang, Wenhao, Bengio, Yoshua, Eshraghian, Jason

arXiv.org Artificial Intelligence

Modern LLMs are trained to "think" primarily via explicit text generation, such as chain-of-thought (CoT), which defers reasoning to post-training and under-leverages pre-training data. We present and open-source Ouro, named after the recursive Ouroboros, a family of pre-trained Looped Language Models (LoopLM) that instead build reasoning into the pre-training phase through (i) iterative computation in latent space, (ii) an entropy-regularized objective for learned depth allocation, and (iii) scaling to 7.7T tokens. The Ouro 1.4B and 2.6B models achieve superior performance, matching the results of SOTA LLMs of up to 12B parameters across a wide range of benchmarks. Through controlled experiments, we show this advantage stems not from increased knowledge capacity, but from superior knowledge manipulation capabilities. We also show that LoopLM yields reasoning traces more aligned with final outputs than explicit CoT. We hope our results show the potential of LoopLM as a novel scaling direction in the reasoning era. Our model is available here: http://ouro-llm.github.io.
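The core LoopLM idea described above, reusing one parameter-shared block across depth steps so that effective depth grows without adding parameters, can be sketched in miniature. The block below is a toy residual map standing in for a full transformer block; the dimensions, loop counts, and update rule are illustrative assumptions, not the actual Ouro architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

# One shared "block": a residual nonlinear update whose weights W are
# reused on every loop iteration (weight tying across depth).
W = rng.standard_normal((d, d)) / np.sqrt(d)

def shared_block(h):
    return h + np.tanh(h @ W)

def looped_forward(h, n_loops):
    """Apply the same block n_loops times: compute (depth) scales with
    n_loops while the parameter count stays fixed."""
    for _ in range(n_loops):
        h = shared_block(h)
    return h

h0 = rng.standard_normal((1, d))
shallow = looped_forward(h0, 2)   # few latent reasoning steps
deep = looped_forward(h0, 8)      # more latent reasoning steps, same weights
```

A learned depth-allocation objective, as in the paper, would decide `n_loops` per input rather than fixing it; here it is simply a parameter.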


From Hubs to Deserts: Urban Cultural Accessibility Patterns with Explainable AI

Pranto, Protik Bose, Islam, Minhazul, Saha, Ripon Kumar, Rivera, Abimelec Mercado, Abbasov, Namig

arXiv.org Artificial Intelligence

Cultural infrastructures, such as libraries, museums, theaters, and galleries, support learning, civic life, health, and local economies, yet access is uneven across cities. We present a novel, scalable, and open-data framework to measure spatial equity in cultural access. We map cultural infrastructures and compute a metric called the Cultural Infrastructure Accessibility Score (CIAS) using exponential distance decay at fine spatial resolution, then aggregate the score per capita and integrate socio-demographic indicators. Interpretable tree-ensemble models with SHapley Additive exPlanations (SHAP) are used to explain associations between accessibility, income, density, and tract-level racial/ethnic composition. Results show a pronounced core-periphery gradient, where non-library cultural infrastructures cluster near urban cores, while libraries track density and provide broader coverage. Non-library accessibility is modestly higher in higher-income tracts, and library accessibility is slightly higher in denser, lower-income areas.
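An exponential distance-decay accessibility score of the kind the abstract describes can be sketched as follows. The coordinates, decay constant, and Euclidean distance are toy assumptions for illustration; the paper's exact kernel, units, and spatial resolution may differ.

```python
import numpy as np

def accessibility_score(point, venues, decay_km=1.0):
    """Toy CIAS-style score: sum of exp(-distance / decay_km) over venues,
    so nearby venues contribute close to 1 and distant ones near 0.
    decay_km is a hypothetical decay constant, not the paper's value."""
    dists = np.linalg.norm(venues - point, axis=1)  # toy planar coords in km
    return float(np.exp(-dists / decay_km).sum())

# Hypothetical venue locations (km grid).
venues = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])

near_core = accessibility_score(np.array([0.0, 0.0]), venues)
periphery = accessibility_score(np.array([10.0, 10.0]), venues)
```

Aggregating such scores per capita across census tracts, as the paper does, then yields the spatial-equity comparison between core and periphery.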


Individualized Cognitive Simulation in Large Language Models: Evaluating Different Cognitive Representation Methods

Zhang, Tianyi, Zhou, Xiaolin, Wang, Yunzhe, Cambria, Erik, Traum, David, Mao, Rui

arXiv.org Artificial Intelligence

Individualized cognitive simulation (ICS) aims to build computational models that approximate the thought processes of specific individuals. While large language models (LLMs) convincingly mimic surface-level human behavior such as role-play, their ability to simulate deeper individualized cognitive processes remains poorly understood. To address this gap, we introduce a novel task that evaluates different cognitive representation methods in ICS. We construct a dataset from recently published novels (released after the training cutoffs of the tested LLMs) and propose an 11-condition cognitive evaluation framework to benchmark seven off-the-shelf LLMs in the context of authorial style emulation. We hypothesize that effective cognitive representations can help LLMs generate storytelling that better mirrors the original author. Thus, we test different cognitive representations, e.g., linguistic features, concept mappings, and profile-based information. Results show that combining conceptual and linguistic features is particularly effective in ICS, outperforming static profile-based cues in overall evaluation. Importantly, LLMs are more effective at mimicking linguistic style than narrative structure, underscoring their limits in deeper cognitive simulation. These findings provide a foundation for developing AI systems that adapt to individual ways of thinking and expression, advancing more personalized and human-aligned creative technologies.